Replication crisis


Supercharging academic writing with generative AI: framework, techniques, and caveats

Lin, Zhicheng

arXiv.org Artificial Intelligence

Academic writing is an indispensable yet laborious part of the research enterprise. This Perspective maps out principles and methods for using generative artificial intelligence (AI), specifically large language models (LLMs), to elevate the quality and efficiency of academic writing. We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing. The framework pinpoints both short-term and long-term reasons for engagement and their underlying mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals the role of AI throughout the writing process, conceptualized through a two-stage model for human-AI collaborative writing, and the nature of AI assistance in writing, represented through a model of writing-assistance types and levels. Building on this framework, we describe effective prompting techniques for incorporating AI into the writing routine (outlining, drafting, and editing) as well as strategies for maintaining rigorous scholarship, adhering to varied journal policies, and avoiding overreliance on AI. Ultimately, the prudent integration of AI into academic writing can ease the communication burden, empower authors, accelerate discovery, and promote diversity in science.


Could machine learning fuel a reproducibility crisis in science?

#artificialintelligence

A CT scan of a tumor in human lungs. Researchers are experimenting with AI algorithms that can spot early signs of the disease. Credit: K. H. Fung/SPL. From biomedicine to political science, researchers increasingly use machine learning as a tool to make predictions on the basis of patterns in their data. But the claims in many such studies are likely to be overblown, according to a pair of researchers at Princeton University in New Jersey. They want to sound an alarm about what they call a "brewing reproducibility crisis" in machine-learning-based sciences. Machine learning is being sold as a tool that researchers can learn in a few hours and use by themselves -- and many follow that advice, says Sayash Kapoor, a machine-learning researcher at Princeton.


Last Week in AI #91: AI's replication crisis, reddit discussions, government-sponsored medical AI

#artificialintelligence

Find this and more in our text version of this news roundup: https://lastweekin.ai/p/last-week-in-ai-91 Music: Deliberate Thought, Inspired by Kevin MacLeod (incompetech.com)


AI is wrestling with a replication crisis

#artificialintelligence

In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics--and computer science overall--researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare. AI is feeling the heat for several reasons. For a start, it is a newcomer. It has only really become an experimental science in the past decade, says Joelle Pineau, a computer scientist at Facebook AI Research and McGill University, who coauthored the complaint.


Threats of a Replication Crisis in Empirical Computer Science

Communications of the ACM

Andy Cockburn (andy.cockburn@canterbury.ac.nz) is a professor at the University of Canterbury, Christchurch, New Zealand, where he is head of the HCI and Multimedia Lab. Pierre Dragicevic is a research scientist at Inria, Orsay, France.


The Flawed Reasoning Behind the Replication Crisis - Issue 74: Networks

Nautilus

Suppose we scan 1 million similar women, and we tell everyone who tests positive that they have cancer. Then we will have correctly told all 10,000 women with cancer that they have it. Of the remaining 990,000 women whose lumps were benign, a 5 percent false-positive rate means we will incorrectly tell 49,500 women that they have cancer. Therefore, of the women we identify as having cancer, about 83 percent will have been incorrectly diagnosed. Imagine you or a loved one received a positive test result.
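The excerpt's base-rate arithmetic can be checked directly. A minimal sketch, assuming the numbers implied above (1 million women screened, 10,000 with cancer all detected, and a 5 percent false-positive rate, i.e., 49,500 / 990,000):

```python
# Base-rate illustration from the excerpt: why most positive results are wrong
# when the condition is rare, even with a seemingly low false-positive rate.
total = 1_000_000
with_cancer = 10_000              # all assumed to test positive (perfect sensitivity)
benign = total - with_cancer      # 990,000
false_positive_rate = 0.05        # implied by 49,500 / 990,000

false_positives = benign * false_positive_rate    # 49,500
positives = with_cancer + false_positives         # 59,500
wrong_fraction = false_positives / positives      # share of positives misdiagnosed

print(f"{false_positives:.0f} false positives")
print(f"{wrong_fraction:.0%} of positive results are incorrect")
```

Running this recovers the article's figure: roughly 83 percent of the positive results are false alarms, because the benign group is 99 times larger than the cancer group.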


Darpa Wants to Solve Science's Replication Crisis With Robots

WIRED

Say this much for the "reproducibility crisis" in science: It's poorly timed. At the same instant that a significant chunk of elected and appointed policymakers seem to disbelieve the science behind global warming, and a significant chunk of parents seem to disbelieve the science behind vaccines … a bunch of actual scientists come along and point out that vast swaths of the social sciences don't stand up to scrutiny. They don't replicate--which is to say, if someone else does the same experiment, they get different (often contradictory) results. The scientific term for that is bad. What's good, though, is that the scientific method is built for self-correction.


What has happened down here is the winds have changed - Statistical Modeling, Causal Inference, and Social Science

#artificialintelligence

Someone sent me this article by psychology professor Susan Fiske, scheduled to appear in the APS Observer, a magazine of the Association for Psychological Science. The article made me a little bit sad, and I was inclined to just keep my response short and sweet, but then it seemed worth the trouble to give some context. I'll first share the article with you, then give my take on what I see as the larger issues. The title and headings of this post allude to the fact that the replication crisis has redrawn the topography of science, especially in social psychology, and I can see that to people such as Fiske who'd adapted to the earlier lay of the land, these changes can feel catastrophic. I will not be giving any sort of point-by-point refutation of Fiske's piece, because it's pretty much all about internal goings-on within the field of psychology (careers, tenure, smear tactics, people trying to protect their labs, public-speaking sponsors, career-stage vulnerability), and I don't know anything about this, as I'm an outsider to psychology and I've seen very little of this sort of thing in statistics or political science. Since I don't know enough about the academic politics of psychology to comment on most of what Fiske writes about, what I'll mostly be talking about is how her attitudes, distasteful as I find them both in substance and in expression, can be understood in light of the recent history of psychology and its replication crisis. In short, Fiske doesn't like when people use social media to publish negative comments on published research. She's implicitly following what I've sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth.